Memory Checking for Parallel RAMs
When outsourcing a database to an untrusted remote server, one might want to verify the integrity of its contents while accessing it. To solve this, Blum et al. [FOCS '91] proposed the notion of memory checking, which allows a user to run a RAM program on a remote server while verifying the integrity of the storage using only small local storage.
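The textbook way to realize such a checker is a Merkle hash tree: the client keeps only the root digest, and every read or write is authenticated along a root-to-leaf path, giving O(log n) overhead per access. Below is a minimal sketch in that spirit (not the paper's construction; class and method names are illustrative, and SHA-256 stands in for a generic collision-resistant hash):

```python
import hashlib

def H(a, b=b""):
    # collision-resistant hash of one or two byte strings (SHA-256 here)
    return hashlib.sha256(a + b).digest()

class MerkleChecker:
    """Online memory checker sketch via a Merkle tree.
    The client trusts only `self.root` (O(1) client storage); `self.tree`
    and `self.data` model untrusted server storage. Every access touches
    O(log n) hashes, and any tampering is reported as soon as it is read."""
    def __init__(self, n, init=b"\x00"):
        assert n and n & (n - 1) == 0, "n must be a power of two"
        self.n = n
        # tree[n+i] holds the hash of leaf i; tree[k] = H(tree[2k], tree[2k+1])
        self.tree = [b""] * (2 * n)          # untrusted
        self.data = [init] * n               # untrusted
        for i in range(n):
            self.tree[n + i] = H(init)
        for k in range(n - 1, 0, -1):
            self.tree[k] = H(self.tree[2 * k], self.tree[2 * k + 1])
        self.root = self.tree[1]             # trusted

    def _check_path(self, i):
        # recompute the root from leaf i and the stored sibling hashes
        node = H(self.data[i])
        k = self.n + i
        while k > 1:
            sib = self.tree[k ^ 1]
            node = H(node, sib) if k % 2 == 0 else H(sib, node)
            k //= 2
        if node != self.root:
            raise ValueError("integrity violation detected")

    def read(self, i):
        self._check_path(i)                  # verify before trusting the value
        return self.data[i]

    def write(self, i, value):
        self._check_path(i)                  # detect prior tampering
        self.data[i] = value
        k = self.n + i
        self.tree[k] = H(value)
        while k > 1:                         # refresh hashes on the path
            k //= 2
            self.tree[k] = H(self.tree[2 * k], self.tree[2 * k + 1])
        self.root = self.tree[1]             # new trusted root
```

Any direct modification of `data` or `tree` by the server makes the recomputed root mismatch the trusted one, so the next access raises an error.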
In this work, we define and initiate the formal study of memory checking for Parallel RAMs (PRAMs). The parallel RAM model is very expressive and captures many modern architectures, such as multi-core architectures and cloud clusters. When multiple clients run a PRAM algorithm on a shared remote server, concurrency issues may cause inconsistencies. Therefore, integrity verification is an even more desirable property in this setting.
Assuming only the existence of one-way functions, we construct an online memory checker (one that reports faults as soon as they occur) for PRAMs with O(log N) simulation overhead in both work and depth. In addition, we construct an offline memory checker (one that reports faults only after a long sequence of operations) with O(1) amortized simulation overhead in both work and depth. Our constructions match the best known simulation overheads of memory checkers in the standard single-user RAM setting. As an application of our parallel memory checking constructions, we additionally construct the first maliciously secure oblivious parallel RAM (OPRAM) with polylogarithmic overhead.
MacORAMa: Optimal Oblivious RAM with Integrity
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM '96), is a primitive that allows a client to perform RAM computations on an external database without revealing any information through the access pattern. For a database of size N, well-known lower bounds show that a multiplicative overhead of Ω(log N) in the number of RAM queries is necessary assuming O(1) client storage. A long sequence of works culminated in the asymptotically optimal construction of Asharov, Komargodski, Lin, and Shi (CRYPTO '21) with O(log N) worst-case overhead and O(1) client storage. However, this optimal ORAM is known to be secure only in the honest-but-curious setting, where an adversary is allowed to observe the access patterns but not modify the contents of the database. In the malicious setting, where an adversary is additionally allowed to tamper with the database, this construction and many others in fact become insecure.
In this work, we construct the first maliciously secure ORAM with worst-case O(log N) overhead and O(1) client storage assuming one-way functions, which are also necessary. By the Ω(log N) lower bound, our construction is asymptotically optimal. To attain this overhead, we develop techniques to intricately interleave online and offline memory checking for malicious security. Furthermore, we complement our positive result by showing the impossibility of a generic overhead-preserving compiler from honest-but-curious to malicious security, barring a breakthrough in memory checking.
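For intuition about the obliviousness requirement, the simplest (and most wasteful) way to hide an access pattern is a linear scan: touch every physical cell on every logical access, so the observed pattern is independent of the address. This illustrative baseline (not any construction from the paper) has O(N) overhead per access, far from the optimal Θ(log N):

```python
class TrivialORAM:
    """Linear-scan ORAM baseline: every logical read/write touches every
    physical cell in the same order, so an observer of the physical access
    pattern learns nothing about the logical address. O(N) overhead."""
    def __init__(self, n):
        self.cells = [0] * n

    def access(self, op, addr, value=None):
        out = None
        for i in range(len(self.cells)):
            v = self.cells[i]            # read every cell
            if i == addr:
                out = v                  # remember the old value at addr
                if op == "write":
                    v = value            # swap in the new value
            self.cells[i] = v            # write every cell back
        return out
```

Every call performs exactly one read and one write per cell regardless of `addr`, which is what makes the pattern oblivious; the entire line of work above is about achieving the same guarantee with only logarithmic overhead.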
Listing, Verifying and Counting Lowest Common Ancestors in DAGs: Algorithms and Fine-Grained Lower Bounds
The AP-LCA problem asks, given an n-node directed acyclic graph (DAG), to
compute for every pair of vertices u and v in the DAG a lowest common
ancestor (LCA) of u and v, if one exists. In this paper we study several
interesting variants of AP-LCA, providing both algorithms and fine-grained
lower bounds for them. The lower bounds we obtain are the first conditional
lower bounds for LCA problems higher than n^ω, where ω is
the matrix multiplication exponent. Some of our results include:
- In any DAG, we can detect all vertex pairs that have at most two LCAs and
list all of their LCAs in Õ(n^ω) time. This algorithm extends a result
of [Kowaluk and Lingas ESA'07], which showed an Õ(n^ω)-time
algorithm that detects all pairs with a unique LCA in a DAG and outputs their
corresponding LCAs.
- Listing 7 LCAs per vertex pair in DAGs requires n^{3-o(1)} time under
the popular assumption that 3-uniform 5-hyperclique detection requires
n^{5-o(1)} time. This is surprising since essentially cubic time is
sufficient to list all LCAs (if ω = 2).
- Counting the number of LCAs for every vertex pair in a DAG requires
n^{3-o(1)} time under the Strong Exponential Time Hypothesis, and
n^{ω(1,2,1)-o(1)} time under the 4-Clique hypothesis. This shows that
the algorithm of [Eckhardt, Mühling and Nowak ESA'07] for listing all LCAs
for every pair of vertices is likely optimal.
- Given a DAG and a vertex w_{u,v} for every vertex pair (u,v), verifying
whether all w_{u,v} are valid LCAs requires n^{2.5-o(1)} time assuming
3-uniform 4-hyperclique detection requires n^{4-o(1)} time. This defies the common
intuition that verification is easier than computation, since returning some LCA
per vertex pair can be solved in Õ(n^{2.447}) time [Grandoni et al. SODA'21].

Comment: To appear in ICALP 2022. Abstract shortened to fit arXiv requirements.